
    A variable neighborhood search algorithm for the constrained task allocation problem

    A Variable Neighborhood Search algorithm is proposed for solving a task allocation problem whose main characteristics are: (i) each task requires a certain amount of resources, and each processor has a finite capacity to be shared among the tasks it is assigned; (ii) the cost of a solution includes a fixed cost for each processor used, an assignment cost, and a communication cost between tasks assigned to different processors. A computational experiment shows that the algorithm is satisfactory in terms of time and solution quality.
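    The basic VNS scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the instance data, the penalty weight for capacity violations, and the choice of single-task moves for local search are all assumptions made here for the example.

```python
import random

def cost(assign, demand, cap, fixed, assign_cost, comm):
    """Total cost of an assignment: a fixed cost per processor used, an
    assignment cost per task, and a communication cost for each pair of
    communicating tasks placed on different processors. Capacity
    violations are penalized heavily to steer the search to feasibility."""
    load = {}
    for t, p in enumerate(assign):
        load[p] = load.get(p, 0) + demand[t]
    penalty = sum(max(0, l - cap[p]) for p, l in load.items()) * 10**6
    c = sum(fixed[p] for p in load)
    c += sum(assign_cost[t][p] for t, p in enumerate(assign))
    c += sum(w for (i, j), w in comm.items() if assign[i] != assign[j])
    return c + penalty

def vns(demand, cap, fixed, assign_cost, comm, k_max=3, iters=300, seed=1):
    """Basic VNS: shake with k random reassignments, run a local search
    over single-task moves, then grow k on failure and reset on success."""
    rng = random.Random(seed)
    n, m = len(demand), len(cap)
    args = (demand, cap, fixed, assign_cost, comm)
    best = [rng.randrange(m) for _ in range(n)]
    best_c = cost(best, *args)
    k = 1
    for _ in range(iters):
        cand = best[:]
        for _ in range(k):                       # shaking
            cand[rng.randrange(n)] = rng.randrange(m)
        improved = True
        while improved:                          # local search
            improved = False
            cur = cost(cand, *args)
            for t in range(n):
                for p in range(m):
                    old = cand[t]
                    if p == old:
                        continue
                    cand[t] = p
                    c = cost(cand, *args)
                    if c < cur:
                        cur, improved = c, True
                    else:
                        cand[t] = old
        c = cost(cand, *args)
        if c < best_c:
            best, best_c, k = cand[:], c, 1
        else:
            k = k % k_max + 1
    return best, best_c
```

    The shaking step is what distinguishes VNS from plain restarts: perturbations grow only while the incumbent resists improvement, so the search stays near good solutions as long as they keep paying off.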

    Mining whole sample mass spectrometry proteomics data for biomarkers: an overview

    In this paper we aim to provide a concise overview of designing and conducting an MS proteomics experiment in such a way as to allow statistical analysis that may lead to the discovery of novel biomarkers. We provide a summary of the various stages that make up such an experiment, highlighting the need for experimental goals to be decided upon in advance. We discuss issues in experimental design at the sample collection stage, and good practice for standardising protocols within the proteomics laboratory. We then describe approaches to the data mining stage of the experiment, including the processing steps that transform a raw mass spectrum into a usable form. We propose a permutation-based procedure for determining the significance of reported error rates. Finally, because of its general advantages in speed and cost, we suggest that MS proteomics may be a good candidate for an early primary screening approach to disease diagnosis, identifying areas of risk and making referrals for more specific tests without necessarily making a diagnosis in its own right. Our discussion is illustrated with examples drawn from experiments on bovine blood serum conducted in the Centre for Proteomic Research (CPR) at Southampton University.
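    The permutation-based significance idea can be sketched generically: repeat the entire error-rate estimate on data with randomly permuted class labels, and ask how often chance labelling does at least as well as the real labelling. The `error_fn` interface below is an assumption for illustration; the paper's specific classifier and error estimator are not reproduced here.

```python
import random

def permutation_pvalue(error_fn, labels, n_perm=999, seed=0):
    """Monte-Carlo permutation p-value for an observed error rate.
    error_fn maps a labelling to an estimated error rate; the whole
    estimation pipeline must be re-run on each permuted labelling to
    avoid optimistic bias. The +1 terms give the standard conservative
    estimate, so the p-value can never be exactly zero."""
    rng = random.Random(seed)
    observed = error_fn(labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if error_fn(shuffled) <= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

    A small permuted-label error rate count (low p-value) indicates the reported error rate is unlikely to arise from a classifier exploiting noise alone.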

    Train Scheduling and Rescheduling in the UK with a Modified Shifting Bottleneck Procedure

    This paper introduces a modified shifting bottleneck approach to solve train scheduling and rescheduling problems. The problem is formulated as a job shop scheduling model, and a mixed integer linear programming model is also presented. The shifting bottleneck procedure is a well-established heuristic method for obtaining solutions to the job shop and other machine scheduling problems. We modify the classical shifting bottleneck approach to make it suitable for the types of job shop problem that arise in train scheduling. The method decomposes the problem into several single machine problems. Different variations of the method are considered with regard to solving the single machine problems. We compare and report the performance of the algorithms for a case study based on part of the UK railway network.
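    The single-machine subproblems in a shifting bottleneck decomposition typically minimize maximum lateness. In the simplest special case (all jobs available at time zero, no release dates), Jackson's earliest-due-date rule solves this exactly; the paper's train-scheduling subproblems are harder (release dates and precedence constraints) and need more machinery, so the sketch below only illustrates the base case.

```python
def edd_max_lateness(jobs):
    """Jackson's EDD rule: on a single machine with all jobs available at
    time zero, sequencing by non-decreasing due date minimizes maximum
    lateness. jobs is a list of (processing_time, due_date) pairs."""
    seq = sorted(jobs, key=lambda j: j[1])
    t, lmax = 0, float("-inf")
    for p, d in seq:
        t += p                    # completion time of this job
        lmax = max(lmax, t - d)   # lateness = completion - due date
    return seq, lmax
```

    The decomposition repeatedly solves such a subproblem for each unscheduled machine, fixes the sequence on the "bottleneck" (the machine with the worst lateness), and re-optimizes the rest.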

    The Vehicle Routing Problem with Release and Due Dates


    A genetic algorithm for two-dimensional bin packing with due dates

    This paper considers a new variant of the two-dimensional bin packing problem where each rectangle is assigned a due date and each bin has a fixed processing time. Hence the objective is not only to minimize the number of bins, but also to minimize the maximum lateness of the rectangles. This problem is motivated by the cutting of stock sheets and the potential increased efficiency that might be gained by drawing on a larger pool of demand pieces by mixing orders, while also aiming to ensure a certain level of customer service. We propose a genetic algorithm for searching the solution space, which uses a new placement heuristic for decoding the genes, based on the best-fit heuristic designed for strip packing problems. The genetic algorithm employs an innovative crossover operator that considers several different children from each pair of parents. Further, the dual objective is optimized hierarchically, with the primary objective periodically alternating between maximum lateness and number of bins. As a result, the approach produces several non-dominated solutions with different trade-offs. Two further approaches are implemented. One is based on a previous Unified Tabu Search, suitably modified to tackle this revised problem. The other is randomized descent and serves as a benchmark for comparing the results. Comprehensive computational results are presented, which show that the Unified Tabu Search still works well in minimizing the bins, but the genetic algorithm performs slightly better. When also considering maximum lateness, the genetic algorithm is considerably better.
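    The link between packing and lateness can be shown in a deliberately simplified one-dimensional form: if bin i finishes processing at (i + 1) * bin_time, then every item's lateness is its bin's completion time minus its due date. The best-fit-decreasing placement below is a hypothetical stand-in for the paper's two-dimensional rectangle-placement heuristic, included only to make the dual objective concrete.

```python
def pack_with_due_dates(items, bin_cap, bin_time):
    """1-D best-fit-decreasing sketch: items are (size, due_date) pairs,
    every bin has capacity bin_cap, and bin i finishes processing at
    (i + 1) * bin_time, so an item's lateness is its bin's completion
    time minus its due date. Returns (number_of_bins, max_lateness)."""
    bins = []                                # each entry: [remaining, items]
    for size, due in sorted(items, reverse=True):
        best = None                          # tightest bin that still fits
        for i, (rem, _) in enumerate(bins):
            if rem >= size and (best is None or rem < bins[best][0]):
                best = i
        if best is None:
            bins.append([bin_cap, []])
            best = len(bins) - 1
        bins[best][0] -= size
        bins[best][1].append((size, due))
    lmax = max((i + 1) * bin_time - due
               for i, (_, placed) in enumerate(bins)
               for _, due in placed)
    return len(bins), lmax
```

    Note the tension the paper exploits: a packing that minimizes bins may defer an urgent item to a late bin, while a lateness-driven packing may open extra bins, which is why the two objectives are alternated hierarchically.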

    Using the spatial population abundance dynamics engine for conservation management

    1. An explicit spatial understanding of population dynamics is often critical for effective management of wild populations. Sophisticated approaches are available to simulate these dynamics, but are largely either spatially homogeneous or agent-based, and thus best suited to small spatial or temporal scales. These approaches also often ignore financial decisions crucial to choosing management approaches on the basis of cost-effectiveness.
    2. We created a user-friendly and flexible modelling framework for simulating these population issues at large spatial scales: the Spatial Population Abundance Dynamics Engine (SPADE). SPADE is based on the STAR model (McMahon et al. 2010) and uses a reaction-diffusion approach to model population trajectories and a cost-benefit analysis technique to calculate optimal management strategies over long periods and across broad spatial scales. It expands on STAR by incorporating species interactions and multiple concurrent management strategies, and by allowing full user control of functional forms and parameters.
    3. We used SPADE to simulate the eradication of feral domestic cats Felis catus on sub-Antarctic Marion Island (Bester et al. 2002) and compared modelled outputs to observed data. The parameters of the best-fitting model reflected the conditions of the management programme, and the model successfully simulated the observed movement of the cat population to the southern and eastern portion of the island under hunting pressure. We further demonstrated that none of the management strategies would likely have been successful within a reasonable timeframe if performed in isolation.
    4. SPADE is applicable to a wide range of population management problems, and allows easy generation, modification and analysis of management scenarios. It is a useful tool for the planning, evaluation and optimisation of the management of wild populations, and can be used without specialised training.
    Supporting information: Appendix S1, SPADE manual; Appendix S2, details of algorithms used in SPADE; Appendix S3, details of statistical models; Appendix S4, source code for the SPADE package; Appendix S5, description of potential issues in using STAR.
    The development of SPADE was aided extensively by input from the Australian Alps National Parks Cooperative Management Programme's Feral Horse Working Group, including participants from Parks Victoria, the NSW National Parks and Wildlife Service, the ACT Parks and Conservation Service and Forestry Corporation NSW.
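    The reaction-diffusion core of such a model can be illustrated with a minimal one-dimensional explicit finite-difference step: logistic growth, diffusion, and a harvest (management) term. SPADE itself is two-dimensional and adds species interactions and cost-benefit optimisation; the equation form, boundary handling, and parameter names below are assumptions made for this sketch.

```python
def step(n, dx, dt, r, K, D, h):
    """One explicit Euler step of dn/dt = r*n*(1 - n/K) + D*d2n/dx2 - h*n
    on a 1-D grid with reflecting (zero-flux) boundaries: logistic growth
    with carrying capacity K, diffusion coefficient D, and harvest rate h.
    Only stable for sufficiently small dt relative to dx**2 / D."""
    out = list(n)
    for i in range(len(n)):
        left = n[i - 1] if i > 0 else n[i]
        right = n[i + 1] if i < len(n) - 1 else n[i]
        lap = (left - 2.0 * n[i] + right) / dx**2     # discrete Laplacian
        growth = r * n[i] * (1.0 - n[i] / K)
        out[i] = max(0.0, n[i] + dt * (growth + D * lap - h * n[i]))
    return out
```

    Iterating this step over a management horizon, and varying h by cell and by year, is the kind of trajectory simulation over which a cost-benefit layer can then search for cost-effective strategies.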

    The effect of graphite and carbon black ratios on conductive ink performance

    Conductive inks based on graphite and carbon black are used in a host of applications including energy storage, energy harvesting, electrochemical sensors and printed heaters. This requires accurate control of electrical properties tailored to the application; ink formulation is a fundamental element of this. Data on how formulation relates to properties have tended to apply to only single types of conductor at any time, with data on mixed types of carbon only empirical thus far. Therefore, screen-printable carbon inks with differing graphite, carbon black and vinyl polymer content were formulated and printed to establish the effect on rheology, deposition and conductivity. The study found that at the higher total carbon loading of 29.4% by mass, optimal conductivity (0.029 Ω cm) was achieved at a graphite to carbon black ratio of 2.6 to 1. For the lower total carbon loading (21.7 mass %), this ratio was reduced to 1.8 to 1. Formulation affected viscosity and hence ink transfer, and also surface roughness, due to retention of features from the screen printing mesh and the inherent roughness of the carbon components, as well as the ability of features to be reproduced consistently.

    Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: meta-regression of six UK longitudinal studies

    Background: Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of true predictor-outcome correlation across the range of applicant abilities. Methods: Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry in the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation. Results: Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (mean = .245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs/O-levels.
    Conclusions: Educational attainment has strong CLPVs for undergraduate and postgraduate performance, accounting for perhaps 65% of true variance in first-year performance. Such CLPVs justify the use of educational attainment measures in selection, but also raise a key theoretical question concerning the remaining 35% of variance (even after measurement error, range restriction and right-censorship have been taken into account). Just as in astrophysics, ‘dark matter’ and ‘dark energy’ are posited to balance various theoretical equations, so medical student selection must also have its ‘dark variance’, whose nature is not yet properly characterized, but explains a third of the variation in performance during training. Some variance probably relates to factors which are unpredictable at selection, such as illness or other life events, but some is probably also associated with factors such as personality, motivation or study skills.
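    Why CLPVs sit so far above raw predictor-outcome correlations can be illustrated with the two standard psychometric corrections: range restriction (selected students are far less variable on the predictor than the applicant pool) and disattenuation for measurement error. This is a simplified sketch in the spirit of the Hunter, Schmidt and Le (2006) corrections, not the paper's exact method, which additionally corrects for right-censorship of grades.

```python
import math

def construct_level_validity(r_obs, u, rel_x=1.0, rel_y=1.0):
    """Thorndike Case II correction for direct range restriction, followed
    by disattenuation for measurement error. u is the ratio of the
    restricted (selected-group) predictor SD to the unrestricted
    (applicant-pool) SD: u < 1 means the selected group is less variable.
    rel_x and rel_y are predictor and outcome reliabilities."""
    U = 1.0 / u
    r_rr = (r_obs * U) / math.sqrt(1.0 + r_obs**2 * (U**2 - 1.0))
    return r_rr / math.sqrt(rel_x * rel_y)
```

    With an observed correlation of .3 and the selected group showing only half the applicant-pool SD on the predictor, the corrected estimate is already above .5, which is the direction and rough magnitude of the POC-to-CLPV gap reported above.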